
    HINT: Hierarchical Invertible Neural Transport for Density Estimation and Bayesian Inference

    A large proportion of recent invertible neural architectures is based on a coupling block design. It operates by dividing incoming variables into two sub-spaces, one of which parameterizes an easily invertible (usually affine) transformation that is applied to the other. While the Jacobian of such a transformation is triangular, it is very sparse and thus may lack expressiveness. This work presents a simple remedy by noting that (affine) coupling can be repeated recursively within the resulting sub-spaces, leading to an efficiently invertible block with a dense triangular Jacobian. By formulating our recursive coupling scheme via a hierarchical architecture, HINT allows sampling from a joint distribution p(y,x) and the corresponding posterior p(x|y) using a single invertible network. We demonstrate the power of our method for density estimation and Bayesian inference on a novel data set of 2D shapes in Fourier parameterization, which enables consistent visualization of samples for different dimensionalities.
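
    A minimal sketch of the recursive coupling idea, assuming a PyTorch implementation with illustrative class names and layer sizes (not the authors' code): affine coupling is applied once and then again inside each of the two sub-spaces until the blocks become scalar, so the block stays efficiently invertible while its triangular Jacobian becomes dense.

```python
# Hypothetical sketch of hierarchical (recursive) affine coupling in the spirit of HINT.
import torch
import torch.nn as nn

class RecursiveAffineCoupling(nn.Module):
    """Applies affine coupling, then recurses into both halves."""
    def __init__(self, dim, hidden=64, min_dim=2):
        super().__init__()
        self.dim = dim
        self.leaf = dim < min_dim
        if self.leaf:
            return
        self.d1 = dim // 2
        self.d2 = dim - self.d1
        # conditioner producing scale and shift for the second half
        self.net = nn.Sequential(
            nn.Linear(self.d1, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.d2),
        )
        # recurse: couple again inside each sub-space
        self.left = RecursiveAffineCoupling(self.d1, hidden, min_dim)
        self.right = RecursiveAffineCoupling(self.d2, hidden, min_dim)

    def forward(self, x):
        if self.leaf:
            return x, x.new_zeros(x.shape[0])
        x1, x2 = x[:, :self.d1], x[:, self.d1:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                      # keep scales bounded
        y2 = x2 * torch.exp(s) + t
        logdet = s.sum(dim=1)
        y1, ld1 = self.left(x1)
        y2, ld2 = self.right(y2)
        return torch.cat([y1, y2], dim=1), logdet + ld1 + ld2

    def inverse(self, y):
        if self.leaf:
            return y
        y1, y2 = y[:, :self.d1], y[:, self.d1:]
        x1 = self.left.inverse(y1)
        x2 = self.right.inverse(y2)
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([x1, (x2 - t) * torch.exp(-s)], dim=1)

block = RecursiveAffineCoupling(8)
x = torch.randn(4, 8)
y, logdet = block(x)
print(torch.allclose(block.inverse(y), x, atol=1e-5))  # invertibility check
```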

    Using an n-zone TDI camera for acquisition of multiple images with different illuminations in a single scan

    For fast scanning of large surfaces with microscopic resolution, or for scanning of roll-fed material, TDI line scan cameras are typically used. TDI cameras sum up the light collected in adjacent lines of the image sensor synchronously with the motion of the object and therefore have a much higher sensitivity than standard line scan cameras. For many applications in the field of optical inspection, more than one image of the object under test is needed, each under a different illumination. For this task we either need more than one TDI camera or we have to scan the object several times under different illumination conditions; neither solution is entirely satisfactory. In this paper we present a solution based on a modified TDI sensor consisting of three or more separate TDI zones. With this n-zone TDI camera it is possible to acquire multiple images with different illuminations in a single scan. In a simulation we demonstrate the principle of operation of the camera and the necessary image preprocessing, which can be implemented in the frame grabber hardware.
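
    As a rough illustration of the acquisition principle, the following numpy sketch simulates three TDI zones that each integrate a fixed number of sensor lines synchronously with the object motion under their own illumination; the frame-grabber preprocessing then reduces to shifting each zone image by its known line offset and correcting the gain. The noise-free imaging model, the offsets, and all names are assumptions for the sketch, not the simulation from the paper.

```python
# Toy simulation of an n-zone TDI scan (illustrative assumptions, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
obj = rng.random((200, 64))            # reflectance of the scanned object (lines x pixels)
illum = [1.0, 0.5, 2.0]                # relative intensity of the three illuminations
stages = 16                            # TDI stages summed per zone
zone_offsets = [0, stages, 2 * stages] # first sensor line of each zone

def scan(obj, illum, stages, zone_offsets):
    n_lines, width = obj.shape
    images = [np.zeros((n_lines, width)) for _ in illum]
    for t in range(n_lines):           # the object advances by one line per clock
        for z, (gain, off) in enumerate(zip(illum, zone_offsets)):
            # the line read out of zone z at time t has been summed over `stages` clocks
            src = t - off - (stages - 1)
            if 0 <= src < n_lines:
                images[z][t] = stages * gain * obj[src]
    return images

images = scan(obj, illum, stages, zone_offsets)
# frame-grabber preprocessing: undo the fixed line offset of each zone
registered = [np.roll(img, -(off + stages - 1), axis=0)
              for img, off in zip(images, zone_offsets)]
# after registration and gain correction the zones show the same object lines
print(np.allclose(registered[0][:100] / illum[0], registered[2][:100] / illum[2]))
```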

    A topological sampling theorem for robust boundary reconstruction and image segmentation

    Existing theories on shape digitization impose strong constraints on admissible shapes, and require error-free data. Consequently, these theories are not applicable to most real-world situations. In this paper, we propose a new approach that overcomes many of these limitations. It assumes that segmentation algorithms represent the detected boundary by a set of points whose deviation from the true contours is bounded. Given these error bounds, we reconstruct boundary connectivity by means of Delaunay triangulation and α-shapes. We prove that this procedure is guaranteed to result in topologically correct image segmentations under certain realistic conditions. Experiments on real and synthetic images demonstrate the good performance of the new method and confirm the predictions of our theory.
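
    A small sketch of the reconstruction step under the stated assumptions (bounded point error and sufficiently dense sampling): Delaunay-triangulate the detected boundary points and keep only short edges, used here as a simplified stand-in for the α-shape criterion (edges admitting an empty circumscribing disk of radius below α). The circular test contour, the error bound, and the parameter values are illustrative, not taken from the paper.

```python
# Simplified Delaunay/alpha-shape-style contour reconstruction (illustrative only).
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

# noisy samples of a circular contour; each point deviates from the true
# boundary by at most eps (the bounded-error assumption)
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 80, endpoint=False)
eps = 0.005
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)
points += rng.uniform(-eps, eps, points.shape)

alpha = 0.05  # radius bound; simplified here to "keep Delaunay edges shorter than 2*alpha"
tri = Delaunay(points)
edges = set()
for s in tri.simplices:
    for i, j in [(s[0], s[1]), (s[1], s[2]), (s[2], s[0])]:
        if np.linalg.norm(points[i] - points[j]) < 2 * alpha:
            edges.add((min(i, j), max(i, j)))

# for a sufficiently dense, bounded-error sampling every vertex should end up
# with exactly two incident edges, i.e. the kept edges form one closed contour
degree = Counter(i for e in edges for i in e)
print(len(edges), "edges kept; closed contour:",
      all(degree[i] == 2 for i in range(len(points))))
```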

    Positive Difference Distribution for Image Outlier Detection using Normalizing Flows and Contrastive Data

    Detecting test data that deviates from the training data is a central problem for safe and robust machine learning. Likelihoods learned by a generative model, e.g., a normalizing flow trained via standard log-likelihood maximization, perform poorly as an outlier score. We propose to use an unlabelled auxiliary dataset and a probabilistic outlier score for outlier detection. We use a self-supervised feature extractor trained on the auxiliary dataset and train a normalizing flow on the extracted features by maximizing the likelihood on in-distribution data and minimizing the likelihood on the contrastive dataset. We show that this is equivalent to learning the normalized positive difference between the in-distribution and the contrastive feature density. We conduct experiments on benchmark datasets and compare against the likelihood, the likelihood ratio, and state-of-the-art anomaly detection methods.
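
    A hedged PyTorch sketch of the training objective described above: one density model is fit by raising the likelihood of in-distribution features while lowering it on features from the unlabelled contrastive set, and its log-likelihood then serves as the outlier score. A trainable diagonal Gaussian (and random vectors in place of self-supervised features) stands in for the normalizing flow to keep the example short; all names and the contrastive weighting are assumptions, not the authors' code.

```python
# Contrastive likelihood objective, sketched with a stand-in density model.
import torch
from torch import nn
from torch.distributions import Normal

class DiagonalGaussian(nn.Module):
    """Tiny stand-in density model; any flow exposing log_prob would be used instead."""
    def __init__(self, dim):
        super().__init__()
        self.loc = nn.Parameter(torch.zeros(dim))
        self.log_scale = nn.Parameter(torch.zeros(dim))

    def log_prob(self, x):
        return Normal(self.loc, self.log_scale.exp()).log_prob(x).sum(-1)

def contrastive_likelihood_loss(model, feats_in, feats_contrastive, weight=1.0):
    # raise the likelihood on in-distribution features, lower it on contrastive ones
    return -model.log_prob(feats_in).mean() + weight * model.log_prob(feats_contrastive).mean()

dim = 16
model = DiagonalGaussian(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

feats_in = torch.randn(256, dim)                # stand-in for in-distribution features
feats_contrastive = torch.randn(256, dim) + 3.0 # stand-in for contrastive features

loss = contrastive_likelihood_loss(model, feats_in, feats_contrastive)
loss.backward()
opt.step()

# at test time, a low log_prob of an image's feature vector flags it as an outlier
scores = model.log_prob(torch.randn(8, dim)).detach()
```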

    Free-form Flows: Make Any Architecture a Normalizing Flow

    Normalizing flows are generative models that directly maximize the likelihood. Previously, the design of normalizing flows was largely constrained by the need for analytical invertibility. We overcome this constraint with a training procedure that uses an efficient estimator for the gradient of the change-of-variables formula. This enables any dimension-preserving neural network to serve as a generative model through maximum likelihood training. Our approach allows placing the emphasis on tailoring inductive biases precisely to the task at hand. Specifically, we achieve excellent results in molecule generation benchmarks utilizing E(n)-equivariant networks. Moreover, our method is competitive in an inverse problem benchmark, while employing off-the-shelf ResNet architectures.
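
    A hedged PyTorch sketch of what such a training procedure can look like: an unconstrained encoder/decoder pair is trained by maximum likelihood, and the gradient of the log-determinant term of the change-of-variables formula is replaced by a single-sample Hutchinson-style surrogate in which the decoder Jacobian, inside a stop-gradient, stands in for the inverse encoder Jacobian. The architecture, the names, and the exact form of the surrogate and reconstruction weighting are illustrative assumptions, not the authors' implementation.

```python
# Free-form maximum-likelihood training with a surrogate log-determinant gradient (sketch).
import torch
from torch import nn

dim = 8
encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))  # any dimension-preserving net
decoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))  # learned approximate inverse
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

def free_form_flow_loss(x, beta=10.0):
    x = x.requires_grad_(True)
    z = encoder(x)
    v = torch.randn_like(z)                                        # Hutchinson probe vector
    # v^T J_f, kept in the graph so its gradient reaches the encoder parameters
    jT_v = torch.autograd.grad((z * v).sum(), x, create_graph=True)[0]
    # J_g v, detached: the decoder Jacobian acts as a stop-gradient stand-in for J_f^{-1}
    jg_v = torch.autograd.functional.jvp(decoder, z.detach(), v)[1].detach()
    # surrogate whose gradient approximates the gradient of log|det J_f(x)|
    surrogate = (jT_v * jg_v).sum(dim=1)
    log_pz = -0.5 * (z ** 2).sum(dim=1)                            # standard normal latent (up to a constant)
    recon = ((decoder(z) - x) ** 2).sum(dim=1)                     # keeps the decoder an inverse of the encoder
    return (beta * recon - log_pz - surrogate).mean()

x = torch.randn(32, dim)
loss = free_form_flow_loss(x)
loss.backward()
opt.step()
```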